Ans: In a data-warehousing architecture, ETL is a key component that manages data for the business process. ETL stands for Extract, Transform, and Load. Extract reads data from a source database; Transform converts the data into a format suitable for reporting and analysis; Load writes the data into the target database.
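As a rough illustration, the three steps can be sketched in plain Python; the table and column names here are hypothetical:

```python
import sqlite3

def extract(conn):
    """Extract: read raw rows from the source table."""
    return conn.execute("SELECT name, amount FROM raw_sales").fetchall()

def transform(rows):
    """Transform: normalize names and convert amounts to numbers."""
    return [(name.strip().upper(), float(amount)) for name, amount in rows]

def load(conn, rows):
    """Load: write the cleaned rows into the target table."""
    conn.executemany("INSERT INTO sales_fact (name, amount) VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_sales (name TEXT, amount TEXT)")
conn.execute("CREATE TABLE sales_fact (name TEXT, amount REAL)")
conn.executemany("INSERT INTO raw_sales VALUES (?, ?)",
                 [(" alice ", "10.5"), ("bob", "20")])

load(conn, transform(extract(conn)))
print(conn.execute("SELECT * FROM sales_fact").fetchall())
```

In a real warehouse each step is typically handled by a dedicated ETL tool rather than hand-written code, but the control flow is the same.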
Ans: ETL testing includes
Ans: The types of data warehouse applications are
Data mining is the process of extracting hidden predictive information from large databases and interpreting it, and a data warehouse may use data mining for faster analytical processing. Data warehousing, by contrast, is the process of aggregating data from multiple sources into one common repository.
Ans:
Ans:
It is a central component of a multi-dimensional model that contains the measures to be analyzed. Facts are related to dimensions.
Types of facts are
Ans:
Ans: The tracing level is the amount of data stored in the log files. Tracing levels can be broadly classified into two types, Normal and Verbose: the Normal level logs summary information about the session, while the Verbose level additionally logs details for each and every row.
Ans: Grain fact can be defined as the level at which fact information is stored. It is also known as Fact Granularity.
Ans:
A fact table without measures is known as a factless fact table. It can be used to count or track the occurrence of events; for example, it can record an event such as the employee count in a company.
The numeric data in the columns of a fact table is known as Measures.
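A small sketch of how a factless fact table is queried, using SQLite and a hypothetical attendance table; the table holds only foreign keys, and counting its rows yields the event counts that act as derived measures:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A factless fact table: only dimension keys, no numeric measure columns.
conn.execute("CREATE TABLE attendance_fact (employee_id INTEGER, date_id INTEGER)")
conn.executemany("INSERT INTO attendance_fact VALUES (?, ?)",
                 [(1, 20240101), (2, 20240101), (1, 20240102)])

# The "measure" is derived by counting fact rows per dimension key.
counts = conn.execute(
    "SELECT date_id, COUNT(*) FROM attendance_fact GROUP BY date_id ORDER BY date_id"
).fetchall()
print(counts)  # [(20240101, 2), (20240102, 1)]
```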
Ans: A transformation is a repository object which generates, modifies, or passes data. Transformations are of two types: Active and Passive.
Ans: The Lookup Transformation is useful for
Ans:
Round-Robin Partitioning: Rows of data are distributed evenly across all partitions, without regard to the data values, so each partition holds roughly the same number of rows.
Hash Partitioning: A hash function is applied to a partition key so that rows with the same key value are always routed to the same partition.
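A minimal Python sketch of the two strategies (illustrative only; in a real Integration Service the partitioning is configured in the session, not hand-coded):

```python
from itertools import cycle
from zlib import crc32

rows = ["a", "b", "c", "a", "d", "b"]
n_parts = 3

# Round-robin: deal rows out evenly, regardless of their values.
rr = [[] for _ in range(n_parts)]
for part, row in zip(cycle(range(n_parts)), rows):
    rr[part].append(row)

# Hash: route each row by a hash of its key, so equal keys always
# land in the same partition (crc32 keeps this deterministic).
hp = [[] for _ in range(n_parts)]
for row in rows:
    hp[crc32(row.encode()) % n_parts].append(row)

print(rr)  # [['a', 'a'], ['b', 'd'], ['c', 'b']]
```

Note the trade-off visible even here: round-robin balances the load perfectly but scatters equal keys, while hash partitioning keeps equal keys together at the cost of possible skew.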
Ans: The advantage of using the DataReader Destination Adapter is that it populates an ADO recordset (consisting of records and columns) in memory and exposes the data from the DataFlow task by implementing the DataReader interface so that another application can consume the data.
Ans: To update the table using SSIS the possible ways are:
Ans: If you have a non-OLEDB source for the lookup, you have to use a Cache to load the data and use it as the source.
Ans:
Ans:
| Connected Lookup | Unconnected Lookup |
| --- | --- |
| Participates directly in the mapping data flow | Called as a lookup function from another transformation, such as an expression |
| Can return multiple values | Returns only one output port |
| Can be connected to another transformation and returns a value | Cannot be connected to another transformation |
| Can use a static or dynamic cache | Uses only a static cache |
| Supports user-defined default values | Does not support user-defined default values |
| Multiple columns can be returned from the same row or inserted into the dynamic lookup cache | Designates one return port and returns one column from each row |
Ans: A data source view defines the relational schema that is used in the Analysis Services databases. Dimensions and cubes are created from data source views rather than directly from data source objects.
Ans: The difference between ETL and OLAP tools is that ETL tools extract data from source systems, transform it, and load it into the data warehouse, while OLAP tools are used to report on and analyze the data in the warehouse.
ETL tool examples: DataStage, Informatica, etc.
OLAP tool examples: Business Objects, Cognos, etc.
Ans:
Ans:
| Power Center | Power Mart |
| --- | --- |
| Designed to process huge volumes of data | Designed to process low volumes of data |
| Supports ERP sources such as SAP, PeopleSoft, etc. | Does not support ERP sources |
| Supports both local and global repositories | Supports only the local repository |
| Can convert a local repository into a global repository | Cannot convert a local repository into a global one |
Ans: Data staging is an area where you hold the data temporarily on the data warehouse server. Data staging includes the following steps:
Ans: A BUS schema is used to identify the dimensions that are common across the various business processes. It consists of conformed dimensions along with a standardized definition of facts.
Ans: Data purging is the process of deleting data from the data warehouse. It removes junk data such as rows with null values or extra spaces.
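A small sketch of a purge step against a hypothetical SQLite table, deleting rows whose values are NULL or whitespace-only:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, "alice"), (2, None), (3, "   "), (4, "bob")])

# Purge junk rows: NULL names or names that are only whitespace.
conn.execute("DELETE FROM customers WHERE name IS NULL OR TRIM(name) = ''")
conn.commit()

remaining = conn.execute("SELECT id, name FROM customers ORDER BY id").fetchall()
print(remaining)  # [(1, 'alice'), (4, 'bob')]
```

In practice a purge is usually driven by retention rules or data-quality checks defined by the warehouse team, not hard-coded predicates like this one.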
Ans: Schema objects are the logical structures that directly refer to the database's data. Schema objects include tables, views, sequences, synonyms, indexes, clusters, function packages, and database links.
Ans:
Ans: Calculation Bug, User Interface Bug, Source Bugs, Load condition bug, ECP related bug.